Easy2Siksha.com
GNDU QUESTION PAPERS 2024
BA/BSc 4th SEMESTER
COMPUTER SCIENCE
(Data Structures & Programming Language Using C++)
Time Allowed: 3 Hours Maximum Marks: 75
Note: Aempt Five quesons in all, selecng at least One queson from each secon. The
Fih queson may be aempted from any secon. All quesons carry equal marks.
SECTION-A
1. Discuss briey the following terms:
(a) Complexity of an algorithm
(b) Time-space trade-o of algorithm
2. How can a linked list be represented in memory? Write an algorithm to delete a node
in the linked list at a given location. Explain this process with the help of an example. 15
SECTION-B
3. What is a Stack? How is it implemented in memory? Explain its various operations
with the help of an example.
4. Dene the concept of Circular Queue. How it is implemented in the memory? Explain its
various operaons with the help of an example.
SECTION-C
5. Explain the concept of Binary search with the help of an example. Discuss its complexity
as well as limitations.
6. Sort the following list of numbers using Selection Sort:
SECTION-D
7. Explain the concept of Overloading Member Functions using a suitable example. 15
8. What is Inheritance? Explain the concept of Multiple Inheritance using a suitable
example. 15
GNDU ANSWER PAPERS 2024
BA/BSc 4th SEMESTER
COMPUTER SCIENCE
(Data Structures & Programming Language Using C++)
Time Allowed: 3 Hours Maximum Marks: 75
Note: Aempt Five quesons in all, selecng at least One queson from each secon. The
Fih queson may be aempted from any secon. All quesons carry equal marks.
SECTION-A
1. Discuss briey the following terms:
(a) Complexity of an algorithm
(b) Time-space trade-o of algorithm
Ans: Understanding Algorithm Complexity
Imagine you have a problem to solve, like arranging books in order, finding a friend’s phone
number in a contact list, or calculating the total marks of a class. The method or step-by-
step procedure you follow to solve such a problem is called an algorithm.
Now the big question is:
“How do we know if one algorithm is better than another?”
To answer this, we look at algorithm complexity.
What Is Algorithm Complexity?
Algorithm complexity means:
“How much effort does an algorithm take to solve a problem?”
This effort can be measured in two main ways:
1. Time Complexity: How long does the algorithm take to run?
2. Space Complexity: How much memory does the algorithm use?
Think of time complexity as “How fast can you finish your homework?”
And space complexity as “How many pages of your notebook do you use?”
When we talk about complexity, we are not counting the actual seconds or exact memory
used. Instead, we analyze how the effort grows when the size of the problem increases.
For example:
Searching one name in a list of 10 people is easy.
Searching one name in a list of 10,00,000 people is harder.
If your algorithm is efficient, it will still work fast even when the amount of data becomes
huge.
Time Complexity: The Time Needed to Solve a Problem
Time complexity tells us how the running time of an algorithm increases as the input size
grows. Instead of using seconds, we use mathematical notations such as:
O(1) – Constant time (super fast!)
O(log n) – Logarithmic time (efficient)
O(n) – Linear time (reasonable)
O(n²) – Quadratic time (slow for big data)
O(2ⁿ) – Very slow!
But don’t get scared by these symbols. Just remember:
Lower complexity = Faster algorithm
Higher complexity = Slower algorithm
Example:
Imagine searching a book in a library:
If books are arranged alphabetically, you can directly jump using clues. This is faster.
If they are in a random pile, you may check every book one by one, which takes
longer.
That’s exactly what time complexity explains.
Space Complexity: Memory Used by the Algorithm
Now let’s think about memory. Every algorithm stores something while working:
variables
temporary values
data structures like arrays, lists, stacks, etc.
Space complexity tells us:
“How much extra memory does an algorithm need to work?”
For example:
If you solve a math problem mentally, you use zero paper (low space).
If you solve it on a notebook using multiple pages, you use more space.
Similarly, some algorithms need very little extra memory, while others need a lot.
Time-Space Trade-Off: The Interesting Balance
Now comes the second part: the time-space trade-off.
Have you ever faced a situation like this?
You can finish homework faster if you use more pages.
Or you can use fewer pages, but it will take more time to solve.
This is exactly the idea of the time-space trade-off in algorithms.
What Does It Mean?
It means:
“Sometimes we can reduce the time taken by an algorithm if we allow it to use more
memory.
And sometimes we can save memory, but then the algorithm may become slower.”
So, there is a balance between time and space. You gain one, you lose the other. Just like in
real life!
Simple Everyday Examples
Example 1: Studying for Exams
Suppose you write all important formulas on a separate sheet.
This sheet uses extra space (memory).
But it helps you revise quickly (saves time).
If you don’t write them, you save space, but you spend more time searching through your
books.
That’s time–space trade-off!
Example 2: Google Search
Google stores huge amounts of pre-processed data in memory.
Why?
So that when you search something, results come instantly.
More memory used
Less time taken
If Google didn’t store data, every search would take a long time.
Example 3: Mobile Phones
Have you noticed that apps run faster on phones with more RAM?
More memory allows faster processing, which is another example of the time-space trade-off.
Why Are These Concepts Important?
Because computers deal with huge data every day:
Banking systems
Social media
Education platforms
Hospitals
Navigation systems
AI and games
If an algorithm is slow or memory-hungry, everything becomes inefficient.
So, computer scientists always try to design algorithms that:
Work fast
Use less memory
Stay efficient even with large data
But achieving both perfectly is rare. That’s why we study complexity and trade-offs to make
the best possible choice.
Final Summary
(a) Complexity of an Algorithm
It means:
How much time and memory an algorithm needs to solve a problem.
It mainly includes:
Time Complexity (speed)
Space Complexity (memory)
We use it to compare which algorithm is better.
(b) Time-Space Trade-Off
It means:
If we want an algorithm to run faster, we may need to give it more memory.
If we want to save memory, it may take more time.
So, time and space are like two sides of a balance. Improving one may worsen the other.
Conclusion
Understanding algorithm complexity and the time-space trade-off is like understanding how
efficiently we can solve a problem. Just like in real life we plan our time and resources,
computer scientists plan how much time and memory an algorithm should use. These
concepts help us build better, faster, and smarter computer systems.
2. How can a linked list be represented in memory? Write an algorithm to delete a node
in the linked list at a given location. Explain this process with the help of an example.
Ans: Linked List Representation and Node Deletion
Introduction
When we study data structures, one of the most fascinating concepts is the linked list.
Unlike arrays, which store elements in contiguous memory locations, linked lists are
dynamic structures where each element (called a node) is connected to the next through a
pointer. This makes linked lists flexible, efficient for insertion and deletion, and widely used
in programming.
In simple words: A linked list is like a chain of people holding hands: each person knows
who comes next, and the chain can grow or shrink easily.
Representation of Linked List in Memory
A linked list node typically has two parts:
1. Data field: Stores the actual value (like a number, string, or object).
2. Pointer (or link) field: Stores the address of the next node in the list.
Structure in C-like pseudocode:
struct Node {
    int data;               // stores the value
    struct Node* next;      // pointer to the next node
};
The head pointer points to the first node of the list.
The last node has its pointer set to NULL, indicating the end of the list.
Imagine memory as a big playground. Each node is like a child standing somewhere in
the playground. They don’t stand in a line (like arrays do), but each child points to the next
one, forming a chain.
Types of Linked Lists
Singly Linked List: Each node points to the next node only.
Doubly Linked List: Each node has two pointers: one to the next node and one to
the previous node.
Circular Linked List: The last node points back to the first node, forming a circle.
For simplicity, we’ll focus on singly linked lists here.
Algorithm to Delete a Node at a Given Location
Deleting a node means removing it from the chain and reconnecting the links so that the list
remains intact.
Steps of the Algorithm:
1. Start at the head node.
2. Traverse the list until you reach the node just before the one you want to delete.
3. Change the pointer of this previous node to skip the unwanted node and point to
the next node instead.
4. Free the memory of the deleted node (important in languages like C/C++).
Pseudocode for Deletion:
void deleteNode(struct Node** head, int position) {
    // If linked list is empty
    if (*head == NULL) return;

    struct Node* temp = *head;

    // If head needs to be removed
    if (position == 0) {
        *head = temp->next;   // move head
        free(temp);           // free memory
        return;
    }

    // Find previous node of the node to be deleted
    for (int i = 0; temp != NULL && i < position - 1; i++) {
        temp = temp->next;
    }

    // If position is more than number of nodes
    if (temp == NULL || temp->next == NULL) return;

    // Node temp->next is the node to be deleted
    struct Node* next = temp->next->next;
    free(temp->next);     // free memory
    temp->next = next;    // unlink the deleted node
}
Example to Understand the Process
Let’s say we have a linked list:
Head → [10 | next] → [20 | next] → [30 | next] → [40 | next] → NULL
Here:
The first node stores 10.
The second node stores 20.
The third node stores 30.
The fourth node stores 40.
Task: Delete the node at position 2 (which stores 30).
Step-by-Step Process:
1. Start at the head (10).
2. Traverse to the node just before position 2 → that’s the node with 20.
3. Change the pointer of 20 so that it points to the node after 30 (i.e., 40).
4. Free the memory of the node containing 30.
Resulting Linked List:
Head → [10 | next] → [20 | next] → [40 | next] → NULL
Notice how the chain is still intact, but the node with 30 has been removed.
Everyday Analogy
Think of a linked list as a group of friends standing in a line, each holding the hand of the
next person. If one friend (say the third) leaves, the second friend simply holds the hand of
the fourth friend. The line remains connected, even though one person is gone.
Advantages of Linked Lists
Dynamic size: Can grow or shrink easily.
Efficient insertion/deletion: No need to shift elements like in arrays.
Disadvantages of Linked Lists
Extra memory: Each node needs extra space for the pointer.
Sequential access: You can’t directly access the nth element; you must traverse from
the head.
Conclusion
Linked lists are powerful data structures that represent elements as nodes connected by
pointers. In memory, each node stores data and a link to the next node. Deleting a node
involves carefully reconnecting the chain so that the list remains intact. Through our
example, we saw how removing the node with 30 required adjusting pointers but kept the
list functional.
SECTION-B
3. What is a Stack? How is it implemented in memory? Explain its various operations
with the help of an example.
Ans: What is a Stack?
Imagine you have a pile of books kept one above another. You can only remove the book
from the top, not from the middle or bottom. If you want to add a new book, you place it
only on the top of the pile. This is exactly how a Stack works in computer science.
A Stack is a linear data structure that follows a special rule:
LIFO: Last In, First Out
This means:
The last item you put into the stack is the first one that comes out.
Just like the last book kept on the top is the first book you will remove.
Think of real-life examples:
Stack of plates
Stack of clothes
Undo operations in MS Word
Back button in web browsers
In all these cases, whatever comes last is accessed first.
So simply,
A Stack is like a container where insertion and deletion happen only at one end called TOP.
How is a Stack Implemented in Memory?
Now let us understand how Stack exists inside the computer.
Stack can be implemented using:
1. Array
2. Linked List
But for most theoretical explanations, we consider Array implementation.
Stack in Memory Using Array
Imagine consecutive blocks of memory like boxes arranged in a line. Each box can store one
value. We also keep a special pointer called TOP, which tells us:
Where is the last inserted element?
Is stack empty?
Is stack full?
Initially, when Stack is empty:
TOP = -1
Whenever we insert an element, TOP increases. Whenever we remove, TOP decreases.
Memory Representation
Suppose stack size = 5
Index:  0   1   2   3   4
Value:  -   -   -   -   -
Initially:
TOP = -1
Stack is Empty
If we insert 10:
TOP = 0
If we insert 20:
TOP = 1
And so on…
The stack grows like this:
Bottom → 10 → 20 → 30 → Top
Operations on Stack
A Stack mainly supports 4 important operations.
1. PUSH Operation (Insertion)
PUSH means inserting a new element into the stack. But remember, insertion is allowed
only at the TOP.
Steps:
1. Check if stack is full (overflow condition)
2. If not full, increase TOP
3. Place new element at TOP
If Stack is full and we still try to PUSH something, it is called:
STACK OVERFLOW
This simply means memory is full and no more elements can be added.
Example
Suppose stack is empty:
[]
TOP = -1
Push 5 →
[5]
TOP = 0
Push 10 →
[5, 10]
TOP = 1
Push 20 →
[5, 10, 20]
TOP = 2
Latest entry 20 is always on TOP.
2. POP Operation (Deletion)
POP means removing element from Stack. Again, removal happens only from TOP.
Steps:
1. Check if stack is empty (underflow condition)
2. Remove the element at TOP
3. Decrease TOP
If Stack is empty and we try to POP, it becomes:
STACK UNDERFLOW
Meaning: There is nothing left to remove.
Continuing the Previous Example
Current Stack:
[5, 10, 20]
TOP = 2
POP →
Removes 20
[5, 10]
TOP = 1
POP →
Removes 10
[5]
TOP = 0
POP →
Removes 5
[]
TOP = -1
Stack becomes Empty.
3. PEEK / TOP Operation
Peek means viewing the topmost element without removing it.
From our earlier stack:
[5, 10, 20]
TOP = 2
PEEK will show:
20
This helps when we want to know what is currently on the top.
4. isEmpty() and isFull() Operations
These are supportive checking functions.
isEmpty()
Returns TRUE if:
TOP == -1
isFull()
Returns TRUE if:
TOP == MAX_SIZE - 1
Real-Life Example to Understand Everything Together
Let us imagine you are browsing the internet.
You open pages in following order:
1. Google
2. YouTube
3. Instagram
Browser keeps these pages in a Stack.
Stack becomes:
Bottom → Google → YouTube → Instagram (Top)
Now if you press BACK button:
Instagram will close first (POP)
Then YouTube
Then Google last
You cannot directly jump to Google without closing Instagram and YouTube because Stack
follows LIFO.
Why is Stack Important?
Stack is not just a theory topic; it is widely used in:
Function calling in programming
Expression evaluation
Undo/Redo Operations
Parentheses checking in compilers
Memory management
Backtracking algorithms (like solving mazes)
So stack is a backbone of many logical operations in computers.
Summary in Simple Points
Stack is a linear data structure
Works on Last In, First Out (LIFO) rule
Uses a pointer called TOP
Implemented using Array or Linked List
Main operations:
PUSH → Insert at top
POP → Remove from top
PEEK → Show top element
isEmpty / isFull → Status check
Overflow occurs when stack is full
Underflow occurs when stack is empty
Final Thoughts
A stack may look like a simple structure, but it is extremely powerful. It makes the computer
handle tasks in a disciplined and organized way. Just like we manage plates in a pile, the
computer manages data using stacks. When you understand Stack, you understand how
your programs, applications, and even your mobile phone internally manage tasks.
4. Dene the concept of Circular Queue. How it is implemented in the memory? Explain its
various operaons with the help of an example.
Ans: Circular Queue: Concept, Implementation, and Operations
Introduction
When we study data structures, one of the most interesting topics is the queue. A queue
works on the principle of FIFO (First In, First Out): the element that enters first is the one
that leaves first, just like people standing in line at a ticket counter. But ordinary queues
have a limitation: once the rear pointer reaches the end of the array, no more elements can
be added, even if there is space at the front due to deletions.
To solve this problem, computer scientists introduced the Circular Queue.
In simple words: A circular queue is like a merry-go-round. When you reach the last seat,
you don’t stop—you circle back to the first seat if it’s empty.
Definition of Circular Queue
A Circular Queue is a linear data structure in which the last position is connected back to the
first position, forming a circle. It allows efficient utilization of memory by reusing empty
spaces created by deletions.
It follows the FIFO principle.
The queue has two pointers:
o Front: Points to the first element.
o Rear: Points to the last element.
When the rear reaches the end of the array, it wraps around to the beginning if
there is space.
Implementation in Memory
Circular queues are usually implemented using arrays or linked lists.
1. Array Implementation
An array of fixed size is used.
Two variables, front and rear, keep track of positions.
When rear reaches the end, it wraps around using the formula:
rear = (rear + 1) % size
Similarly, for deletion:
front = (front + 1) % size
% is the modulo operator, which ensures that the index cycles back to the beginning.
2. Linked List Implementation
Each node contains data and a pointer to the next node.
The last node points back to the first node, forming a circle.
This approach allows dynamic size but is more complex.
Operations on Circular Queue
Let’s explore the main operations step by step:
1. Enqueue (Insertion)
Adds an element at the rear.
If the queue is full, insertion is not possible.
Formula:
rear = (rear + 1) % size
2. Dequeue (Deletion)
Removes an element from the front.
If the queue is empty, deletion is not possible.
Formula:
front = (front + 1) % size
3. Peek (Front Element)
Returns the element at the front without removing it.
4. IsFull
Checks if the queue is full.
Condition:
(rear + 1) % size == front
5. IsEmpty
Checks if the queue is empty.
Condition:
front == -1
Example to Understand
Suppose we have a circular queue of size 5.
Initially:
Front = -1, Rear = -1
Queue = [ , , , , ]
Step 1: Enqueue 10
Front = 0, Rear = 0
Queue = [10, , , , ]
Step 2: Enqueue 20
Front = 0, Rear = 1
Queue = [10, 20, , , ]
Step 3: Enqueue 30
Front = 0, Rear = 2
Queue = [10, 20, 30, , ]
Step 4: Dequeue (removes 10)
Front = 1, Rear = 2
Queue = [ , 20, 30, , ]
Step 5: Enqueue 40, 50, 60
After inserting 40 and 50:
Front = 1, Rear = 4
Queue = [ , 20, 30, 40, 50 ]
Now inserting 60: Rear wraps around to index 0.
Front = 1, Rear = 0
Queue = [60, 20, 30, 40, 50]
Notice how the queue reused the empty space at index 0. This is the beauty of circular
queues.
Everyday Analogy
Imagine a circular queue as a Ferris wheel. People get on at the rear and off at the front.
When the wheel rotates, the seats are reused. Even if one seat becomes empty, the wheel
ensures it can be filled again without wasting space.
Advantages of Circular Queue
Efficient use of memory.
No need to shift elements after deletion.
Simple implementation using arrays or linked lists.
Disadvantages
Fixed size in array implementation.
Slightly more complex logic compared to linear queues.
Conclusion
A Circular Queue is a smart variation of the ordinary queue that solves the problem of
wasted space. Implemented using arrays or linked lists, it uses modulo arithmetic to wrap
around indices. Its operations (enqueue, dequeue, peek, isFull, and isEmpty) make it a
powerful tool in computer science.
SECTION-C
5. Explain the concept of Binary search with the help of an example. Discuss its complexity
as well as limitaons.
Ans: Imagine you are looking for a friend’s name in a telephone directory or a dictionary.
What do you do?
Do you start from the very first page and keep turning pages one by one until you find it? Of
course not! That would take forever.
Instead, you directly open the book somewhere in the middle.
If the name you want comes alphabetically before the names on that page, you flip back.
If it comes after, you flip forward.
And then again you repeat the same idea with the remaining half.
This clever way of reducing your search area into half each time is exactly what Binary
Search does.
So binary search works on a simple golden rule:
Don’t search the whole list.
Keep dividing it into halves until you find your target.
But there is one important condition:
The list must already be sorted (in ascending or descending order).
How Binary Search Works Step-by-Step with Example
Let us take a simple example. Suppose you have the following sorted list of numbers:
5, 9, 12, 18, 23, 30, 45, 50
And you want to find the number 23.
Step 1: Find the Middle
First, we check the middle element of the list.
Middle element here is 18.
Is 18 equal to 23?
No.
Now binary search asks a question:
Is 23 greater than 18 or less than 18?
Since 23 is greater, we know that it cannot be in the left half.
So, we eliminate the left half completely.
Now remaining list is:
23, 30, 45, 50
Step 2: Again Find the Middle of Remaining List
Now we again pick the middle value of the remaining part.
Middle element here is 30.
Is 30 equal to 23?
No.
Is 23 smaller than 30?
Yes!
So we eliminate the right half.
Now remaining list is:
23
Step 3: Check the Final Element
Now only one element is left.
Is it 23?
Yes!
So the search is successful: we found the number!
Binary Search in Human Language
Binary search behaves like a smart detective:
1. It doesn’t waste time checking everything.
2. It directly jumps to the center.
3. Depending on comparison, it throws away half of the suspects.
4. Then it repeats this process until the truth is found.
That is why it is fast and powerful.
Binary Search Algorithm (In Simple Words)
1. Start with the first index and last index.
2. Find the middle index.
3. Compare middle element with the target value.
4. If both are equal → Success!
5. If target is smaller → search in the left half.
6. If target is larger → search in the right half.
7. Repeat steps until element is found or nothing is left.
Time Complexity of Binary Search
Time complexity basically tells us how fast or slow an algorithm works.
Think of it like this:
In linear search, if you want to find a number in a list of 1,000 elements, in the worst case
you may check all 1,000 elements.
But binary search keeps reducing the list into half each time.
Let’s see how:
1000 → 500 → 250 → 125 → 62 → 31 → 15 → 7 → 3 → 1
So instead of 1000 checks, binary search only takes around 10 steps.
Amazing, right?
So its time complexity is:
O(log₂ n)
This means the number of steps increases very slowly even if the list is very large.
For example:
For 1,000 elements → around 10 steps
For 1,000,000 elements → around 20 steps
That's why binary search is extremely powerful and efficient.
Space complexity of basic binary search is:
O(1)
because it does not need extra memory (except in the recursive form, where it becomes O(log n)).
Limitations of Binary Search
Even though binary search is powerful, it is not perfect. It has some limitations:
1. List Must Be Sorted
This is the biggest condition.
Binary search will never work on an unsorted list.
If the data is unsorted, you must sort it first. Sorting itself takes time, so sometimes sorting
may reduce the benefit.
2. Works Best on Random Access Structures
Binary search works best on arrays where you can directly jump to the middle element.
It is not efficient for:
Linked lists
Sequential access structures
Because you cannot directly jump to the middle element.
3. Implementation Can Be Tricky
While the idea is simple, writing correct binary search code can be challenging, especially
handling boundaries (low, high, mid).
4. Not Useful for Small Lists
If the list is very small, binary search gives no big advantage compared to simple linear
search.
5. Dynamically Changing Data
If data changes frequently (insertion or deletion happens a lot), then maintaining sorted
order becomes difficult.
Why Is Binary Search Important?
Binary search is not just a topic for exams.
It is used in real-world systems:
Searching names in phonebook
Searching records in databases
Finding words in dictionaries
Used in gaming algorithms
Used in machine learning and AI searching
Used in Competitive Programming
Used in Operating Systems scheduling and indexing
So understanding binary search means understanding “smart searching”.
Final Conclusion
Binary search is a wonderful technique that teaches us to think smart. Instead of checking
every element, it narrows down the search area by repeatedly dividing the list into halves. It
is extremely fast with a time complexity of O(log n), which makes it far more efficient than
linear search.
However, it requires the list to be sorted and works best on structures where direct access
to elements is possible. Still, despite its limitations, binary search remains one of the most
important and fundamental algorithms in computer science.
6. Sort the following list of numbers using Selection Sort:
Ans: Selection Sort: Concept, Algorithm, and Example
Introduction
Sorting is one of the most fundamental operations in computer science. Whether you are
arranging names alphabetically, organizing exam scores, or structuring data for efficient
searching, sorting plays a crucial role. Among the many sorting techniques, Selection Sort is
one of the simplest and most intuitive.
In simple words: Selection Sort is like arranging books on a shelf by repeatedly picking
the smallest book left and placing it in order.
What is Selection Sort?
Selection Sort is a comparison-based sorting algorithm. It works by repeatedly selecting the
smallest (or largest, depending on order) element from the unsorted portion of the list and
placing it at the beginning of the sorted portion.
Approach: Divide the list into two parts: sorted and unsorted.
Process: Find the minimum element in the unsorted part and swap it with the first
unsorted element.
Repeat: Continue until the entire list is sorted.
It’s called “Selection Sort” because at each step, we select the smallest element.
Step-by-Step Algorithm
1. Start with the first element.
2. Search the entire list to find the smallest element.
3. Swap this smallest element with the first element.
4. Move to the second position and repeat the process for the remaining list.
5. Continue until all elements are sorted.
Pseudocode:
for i = 0 to n-1:
    min_index = i
    for j = i+1 to n-1:
        if arr[j] < arr[min_index]:
            min_index = j
    swap(arr[i], arr[min_index])
Example: Sorting a List
Let’s take a list of numbers:
[64, 25, 12, 22, 11]
We want to sort them in ascending order using Selection Sort.
Pass 1:
Current list: [64, 25, 12, 22, 11]
Smallest element in the list is 11.
Swap 11 with the first element 64.
Result: [11, 25, 12, 22, 64]
Pass 2:
Current list: [11, 25, 12, 22, 64]
Smallest element in the remaining list [25, 12, 22, 64] is 12.
Swap 12 with 25.
Result: [11, 12, 25, 22, 64]
Pass 3:
Current list: [11, 12, 25, 22, 64]
Smallest element in [25, 22, 64] is 22.
Swap 22 with 25.
Result: [11, 12, 22, 25, 64]
Pass 4:
Current list: [11, 12, 22, 25, 64]
Smallest element in [25, 64] is 25.
Swap 25 with itself (no change).
Result: [11, 12, 22, 25, 64]
Pass 5:
Only one element left (64), so the list is already sorted.
Final Sorted List:
[11, 12, 22, 25, 64]
Everyday Analogy
Imagine you are organizing a group of students by height. You look at the entire group, find
the shortest student, and place them at the front. Then you look at the remaining students,
find the next shortest, and place them second. You keep repeating until everyone is lined up
in order. That’s exactly how Selection Sort works.
Characteristics of Selection Sort
1. Time Complexity:
   o Best case: O(n²)
   o Worst case: O(n²)
   o Average case: O(n²)
   Because it always scans the entire unsorted list to find the minimum.
2. Space Complexity:
   o O(1) (in-place sorting, no extra memory required).
3. Stability:
   o Not stable (equal elements may change relative order).
4. Simplicity:
   o Easy to understand and implement.
Advantages of Selection Sort
Simple and intuitive.
Works well for small lists.
Requires minimal memory.
Disadvantages
Inefficient for large datasets due to O(n²) time complexity.
Not stable, which can be problematic in certain applications.
Another Example for Clarity
Let’s sort [29, 10, 14, 37, 13].
Pass 1: Smallest is 10. Swap with 29. → [10, 29, 14, 37, 13]
Pass 2: Smallest in remaining [29, 14, 37, 13] is 13. Swap with 29. → [10, 13, 14, 37,
29]
Pass 3: Smallest in [14, 37, 29] is 14. Swap with itself. → [10, 13, 14, 37, 29]
Pass 4: Smallest in [37, 29] is 29. Swap with 37. → [10, 13, 14, 29, 37]
Pass 5: Last element already sorted.
Final Sorted List: [10, 13, 14, 29, 37]
Conclusion
Selection Sort is a straightforward algorithm that works by repeatedly selecting the smallest
element and placing it in order. Though not efficient for large datasets, it is an excellent way
to understand the basics of sorting and algorithm design.
SECTION-D
7. Explain the concept of Overloading Member Functions using a suitable example.
Ans: What is Overloading Member Functions?
Overloading member functions means having multiple functions with the same name
inside the same class, but each function works differently because:
1. They have a different number of parameters
2. Or they have different types of parameters
3. Or they have a different order of parameters
However, they cannot differ only by return type.
In simple words:
Function overloading allows a class to have more than one function with the same name but
different behaviors.
Why do we need Overloading Member Functions?
Think of real-life situations:
You call someone’s name in a friendly tone → they smile
You call the same name loudly in panic → they rush to help
You whisper the same name softly → they come closer quietly
Same name… different reactions based on context.
Programming is similar. Sometimes we want a function to do similar work but in slightly
different ways. Instead of creating different names like:
addTwoNumbers()
addThreeNumbers()
addFloatNumbers()
We can simply use:
add()
add()
add()
Cleaner, professional, and easier to remember.
Key Points to Remember
All overloaded functions must be inside the same class.
Function name must be the same.
Parameters must be different (type/number/order).
Return type alone cannot differentiate overloaded functions.
Compiler decides which function to call based on arguments provided.
Let's Understand with a Simple C++ Example
Let us take a class Calculator which performs addition in different ways.
#include <iostream>
using namespace std;

class Calculator {
public:
    // Function to add two integers
    int add(int a, int b) {
        return a + b;
    }

    // Function to add three integers
    int add(int a, int b, int c) {
        return a + b + c;
    }

    // Function to add two floating-point numbers
    float add(float a, float b) {
        return a + b;
    }
};

int main() {
    Calculator c;
    cout << "Sum of 2 integers: " << c.add(10, 20) << endl;
    cout << "Sum of 3 integers: " << c.add(10, 20, 30) << endl;
    cout << "Sum of 2 floats: " << c.add(5.5f, 4.5f) << endl;
    return 0;
}
What Happened Here?
We used the same function name add() three times inside the same class.
But:
First add() → takes 2 integers
Second add() → takes 3 integers
Third add() → takes 2 float values
Even though their names are the same, they behave differently. The compiler is intelligent.
It looks at the arguments:
If we pass:
c.add(10, 20);
It matches:
int add(int, int)
If we pass:
c.add(10, 20, 30);
It matches:
int add(int, int, int)
If we pass:
c.add(5.5f, 4.5f);
It matches:
float add(float, float)
This is called Member Function Overloading.
Another Real-Life Example
Think of a Bank System. You may want to display account details. But different users might
provide different details.
So we can create a class:
class Bank {
public:
    void display(int accountNumber) {
        cout << "Account Number: " << accountNumber;
    }

    void display(string name) {
        cout << "Account Holder: " << name;
    }

    void display(int accountNumber, string name) {
        cout << "Account Number: " << accountNumber;
        cout << " Name: " << name;
    }
};
Here, the same function display() is used to show:
Only account number
Only account holder name
Both name and account number
This makes the program flexible, user-friendly, and logical.
Linking the Concept to Real Programming
Overloading member functions is widely used in:
Mathematics-related classes
Banking applications
Billing systems
Gaming programs
Library management systems
Real-time software
Why programmers love it:
It reduces confusion.
It avoids unnecessary multiple function names.
It makes programs clean and readable.
It supports the idea of polymorphism (one name, many forms).
What You Should NOT Do
You cannot overload functions by just changing the return type. For example:
int test() {
    return 10;
}

float test() {
    return 10.5;
}
This will cause a compile-time error because the parameters are the same and only the return type changed. The compiler cannot decide which one to call.
So remember:
Parameters must differ. Return type alone is not enough.
Final Summary (Easy to Remember)
Overloading member functions means using the same function name multiple times
inside a class with different parameters.
It is a great example of compile-time polymorphism.
Function name is same, work is similar but execution varies depending on passed
arguments.
Improves readability and efficiency of code.
Common in practical programming.
Simple One-Line Definition
Overloading member functions means defining multiple functions with the same name but
different parameter lists inside the same class, so that they perform similar but distinct
tasks.
8. What is Inheritance? Explain the concept of Multiple Inheritance using a suitable example.
Ans: Introduction
When we talk about Object-Oriented Programming (OOP), one of the most powerful ideas
is inheritance. Just like in real life, where children inherit traits from their parents, in
programming, classes can inherit properties and behaviors from other classes. This makes
code reusable, organized, and easier to maintain.
In simple words: Inheritance allows us to create new classes based on existing ones,
saving effort and avoiding repetition.
What is Inheritance?
Inheritance is a mechanism in OOP where one class (called the child class or subclass)
derives properties and methods from another class (called the parent class or superclass).
Parent Class (Superclass): The original class that provides attributes and methods.
Child Class (Subclass): The new class that inherits from the parent and can add its
own features.
Everyday Analogy
Think of a “Vehicle” class. All vehicles have common features like wheels, engine, and fuel.
Now, if we create a “Car” class, it automatically inherits these features from “Vehicle” but
can also add its own (like air-conditioning or music system).
Types of Inheritance
1. Single Inheritance: A child class inherits from one parent class.
2. Multilevel Inheritance: A class inherits from another class, which itself inherits from
a third class.
3. Hierarchical Inheritance: Multiple child classes inherit from a single parent class.
4. Multiple Inheritance: A child class inherits from more than one parent class.
Our focus here is on Multiple Inheritance, which is both powerful and tricky.
Multiple Inheritance Concept
Multiple Inheritance means that a single class can inherit features from more than one
parent class.
This allows combining functionalities of different classes into one.
However, it can also create confusion if two parent classes have methods with the
same name (known as the Diamond Problem).
Example in Real Life
Imagine a child who inherits musical talent from their mother and athletic skills from their
father. The child now has both abilities. Similarly, in programming, a class can inherit
features from multiple parent classes.
Example in Programming (Python)
Let’s take a simple example in Python to understand multiple inheritance:
# Parent Class 1
class Father:
    def skill(self):
        print("Father's skill: Playing football")

# Parent Class 2
class Mother:
    def skill(self):
        print("Mother's skill: Singing")

# Child Class inheriting from both
class Child(Father, Mother):
    def talent(self):
        print("Child's talent: Painting")

# Create object of Child
c = Child()
c.skill()    # Which skill will be called?
c.talent()
Output:
Father's skill: Playing football
Child's talent: Painting
Notice that the child inherited from both parents. But when both parents had a method
with the same name (skill), Python followed the Method Resolution Order (MRO) and chose
the first parent (Father).
The Diamond Problem
One of the challenges of multiple inheritance is the Diamond Problem.
Example:
A
/ \
B C
\ /
D
Class D inherits from both B and C.
Both B and C inherit from A.
If D calls a method from A, the compiler may get confused about whether to use the
version from B or C.
Different languages solve this differently:
C++: Uses virtual inheritance to avoid duplication.
Python: Uses Method Resolution Order (MRO) to decide the path.
Advantages of Multiple Inheritance
1. Code Reusability: You can reuse features of multiple classes without rewriting.
2. Flexibility: A child class can combine functionalities from different sources.
3. Rich Functionality: Helps in modeling complex real-world scenarios.
Disadvantages of Multiple Inheritance
1. Complexity: It can make code harder to understand.
2. Diamond Problem: Ambiguity when two parent classes have the same method.
3. Maintenance Issues: Debugging becomes more difficult.
Everyday Analogy for Multiple Inheritance
Imagine you are learning cooking from your mother and carpentry from your father. You
inherit both skills. But if both parents teach you “time management” differently, you may
get confused about which method to follow. That’s exactly what happens in multiple
inheritance when two parent classes have the same method.
Example in C++
#include <iostream>
using namespace std;

class Father {
public:
    void skill() {
        cout << "Father's skill: Driving" << endl;
    }
};

class Mother {
public:
    void skill() {
        cout << "Mother's skill: Dancing" << endl;
    }
};

class Child : public Father, public Mother {
public:
    void talent() {
        cout << "Child's talent: Painting" << endl;
    }
};

int main() {
    Child c;
    c.Father::skill();  // Resolving ambiguity
    c.Mother::skill();
    c.talent();
    return 0;
}
Output:
Father's skill: Driving
Mother's skill: Dancing
Child's talent: Painting
In C++, we can explicitly specify which parent's method to call, solving the ambiguity.
Conclusion
Inheritance is a cornerstone of object-oriented programming, allowing classes to reuse and
extend existing code. Multiple Inheritance is a powerful feature where a class can inherit
from more than one parent, combining their functionalities. While it offers flexibility and
richness, it also introduces challenges like the diamond problem. Different languages handle
these challenges in different ways, ensuring that programmers can use multiple inheritance
effectively.
This paper has been carefully prepared for educational purposes. If you notice any
mistakes or have suggestions, feel free to share your feedback.